Results 1 - 20 of 42
1.
Med Phys ; 51(3): 1597-1616, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38227833

ABSTRACT

BACKGROUND: Multislice spiral computed tomography (MSCT) requires an interpolation between adjacent detector rows during backprojection. Not satisfying the Nyquist sampling condition along the z-axis results in aliasing effects, also known as windmill artifacts. These image distortions are characterized by bright streaks diverging from high-contrast structures. PURPOSE: The z-flying focal spot (zFFS) is a well-established hardware-based solution that aims to double the sampling rate in the longitudinal direction and thereby reduce aliasing artifacts. However, given the technical complexity of the zFFS, this work proposes a deep learning-based approach as an alternative solution. METHODS: We propose a supervised learning approach to perform a mapping between input projections and the corresponding rows required for double sampling in the z-direction. We present a comprehensive evaluation using both a clinical dataset obtained from the raw data of 40 real patient scans acquired with zFFS and a synthetic dataset consisting of 100 simulated spiral scans of a phantom specifically designed for our problem. For the clinical dataset, we used 32 scans for training and 8 scans for validation, whereas for the synthetic dataset, we used 80 scans for training and 20 scans for validation. Both qualitative and quantitative assessments are conducted on a test set consisting of nine real patient scans and six phantom measurements to validate the performance of our approach. A simulation study was performed to investigate the robustness against different scan configurations in terms of detector collimation and pitch value. RESULTS: In the quantitative comparison based on clinical patient scans from the test set, all network configurations show an improvement in the root mean square error (RMSE) of approximately 20% compared to neglecting the doubled longitudinal sampling of the zFFS. The results of the qualitative analysis indicate that both clinical and synthetic training data can reduce windmill artifacts through the application of a correspondingly trained network. Together with the qualitative results from the phantom measurements of the test set, this emphasizes that training our method with synthetic data yields superior windmill artifact reduction. CONCLUSIONS: Deep learning-based raw data interpolation has the potential to enhance the sampling in the z-direction and thus minimize aliasing effects, as is the case with the zFFS. Training with synthetic data in particular showed promising results. While it may not outperform the zFFS, our method represents a beneficial solution for CT scanners lacking the necessary hardware components for the zFFS.
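As a point of reference for the learned row interpolation described above, the sketch below shows the simplest non-learned alternative: doubling the longitudinal sampling of a projection stack by linearly averaging adjacent detector rows. All names and array shapes are illustrative assumptions; the paper replaces the plain average with a trained network.

```python
import numpy as np

def interpolate_missing_rows(proj):
    """Baseline z-interpolation: estimate the intermediate detector rows that the
    zFFS would measure by averaging adjacent rows.
    proj: array of shape (n_views, n_rows, n_channels).
    Returns an array with 2*n_rows - 1 rows, measured and interpolated interleaved."""
    n_views, n_rows, n_chan = proj.shape
    out = np.empty((n_views, 2 * n_rows - 1, n_chan), dtype=proj.dtype)
    out[:, 0::2, :] = proj                                        # measured rows
    out[:, 1::2, :] = 0.5 * (proj[:, :-1, :] + proj[:, 1:, :])    # interpolated rows
    return out
```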


Subjects
Artifacts; Deep Learning; Humans; Tomography, Spiral Computed/methods; Tomography Scanners, X-Ray Computed; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Algorithms
2.
Med Phys ; 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37650780

ABSTRACT

BACKGROUND: Due to technical constraints, dual-source dual-energy CT scans may lack spectral information in the periphery of the patient. PURPOSE: Here, we propose a deep learning-based iterative reconstruction to recover the missing spectral information outside the field of measurement (FOM) of the second source-detector pair. METHODS: In today's Siemens dual-source CT systems, one source-detector pair (referred to as A) typically has a FOM of about 50 cm, while the FOM of the other pair (referred to as B) is limited by technical constraints to a diameter of about 35 cm. As a result, dual-energy applications are currently only available within the small FOM, limiting their use for larger patients. To derive a reconstruction at B's energy for the entire patient cross-section, we propose a deep learning-based iterative reconstruction. Starting with A's reconstruction as the initial estimate, it employs a neural network in each iteration to refine the current estimate according to a raw data fidelity measure. The corresponding mapping is trained using simulated chest, abdomen, and pelvis scans based on a data set containing 70 full-body CT scans. Finally, the proposed approach is tested on simulated and measured dual-source dual-energy scans and compared against existing reference approaches. RESULTS: For all test cases, the proposed approach was able to provide artifact-free CT reconstructions of B for the entire patient cross-section. For simulated data, the remaining error of the reconstructions is between 10 and 17 HU on average, which is about half that of the reference approaches. A similar performance, with an average error of 8 HU, was achieved for real phantom measurements. CONCLUSIONS: The proposed approach is able to recover missing dual-energy information for patients exceeding the small 35 cm FOM of dual-source CT systems. It therefore potentially allows dual-energy applications to be extended to the entire patient cross-section.
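The alternation between a learned refinement and a raw-data fidelity update described in the METHODS can be summarized as a short loop. The sketch below is only a schematic of that scheme; the operator and network names (forward_project, backproject, refine_net) and the gradient-type correction are our own placeholders, not the authors' implementation.

```python
import numpy as np

def complete_fov_reconstruction(recon_A, raw_B, forward_project, backproject,
                                refine_net, n_iter=10, step=1.0):
    """Sketch of a deep learning-based iterative reconstruction (names illustrative).
    recon_A     : initial estimate from pair A (covers the full cross-section)
    raw_B       : measured raw data of pair B (truncated field of measurement)
    forward_project / backproject : linear operators of system B
    refine_net  : trained network mapping an intermediate estimate to a refined one."""
    x = recon_A.copy()
    for _ in range(n_iter):
        x = refine_net(x)                        # learned refinement of the estimate
        residual = forward_project(x) - raw_B    # raw-data fidelity term of system B
        x = x - step * backproject(residual)     # gradient-type consistency update
    return x
```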

3.
Eur Radiol ; 33(8): 5321-5330, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37014409

ABSTRACT

Since 1971 and Hounsfield's first CT system, clinical CT systems have used scintillating energy-integrating detectors (EIDs) that use a two-step detection process. First, the X-ray energy is converted into visible light, and second, the visible light is converted to electronic signals. An alternative, one-step, direct X-ray conversion process using energy-resolving, photon-counting detectors (PCDs) has been studied in detail and early clinical benefits reported using investigational PCD-CT systems. Subsequently, the first clinical PCD-CT system was commercially introduced in 2021. Relative to EIDs, PCDs offer better spatial resolution, higher contrast-to-noise ratio, elimination of electronic noise, improved dose efficiency, and routine multi-energy imaging. In this review article, we provide a technical introduction to the use of PCDs for CT imaging and describe their benefits, limitations, and potential technical improvements. We discuss different implementations of PCD-CT ranging from small-animal systems to whole-body clinical scanners and summarize the imaging benefits of PCDs reported using preclinical and clinical systems. KEY POINTS: • Energy-resolving, photon-counting-detector CT is an important advance in CT technology. • Relative to current energy-integrating scintillating detectors, energy-resolving, photon-counting-detector CT offers improved spatial resolution, improved contrast-to-noise ratio, elimination of electronic noise, increased radiation and iodine dose efficiency, and simultaneous multi-energy imaging. • High-spatial-resolution, multi-energy imaging using energy-resolving, photon-counting-detector CT has been used in investigations into new imaging approaches, including multi-contrast imaging.


Subjects
Iodine; Tomography, X-Ray Computed; Animals; Tomography, X-Ray Computed/methods; Photons; X-Rays; Phantoms, Imaging
4.
Med Phys ; 49(8): 5038-5051, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35722721

ABSTRACT

PURPOSE: We aim to develop a model-based algorithm that compensates for the effects of both pulse pileup (PP) and charge sharing (CS) and to evaluate its performance using computer simulations. METHODS: The proposed PCP algorithm for PP and CS compensation uses the cascaded models for CS and PP we previously developed, maximizes the Poisson log-likelihood, and uses an efficient three-step exhaustive search. For comparison, we also developed an LCP algorithm that combines models for a loss of counts (LCs) and CS. Two types of computer simulations, slab- and computed tomography (CT)-based, were performed to assess the performance of both PCP and LCP with 200 and 800 mA, a (300 µm)² × 1.6-mm cadmium telluride detector, and a dead time of 23 ns. The slab-based assessment used pairs of adipose and iodine slabs of different thicknesses to attenuate the X-rays and assessed the bias and noise of the outputs from one detector pixel; the CT-based assessment simulated a chest/cardiac scan and a head-and-neck scan using a 3D phantom and noisy cone-beam projections. RESULTS: In the slab simulation, the PCP had little or no bias when the expected counts were sufficiently large, even though the probability of count loss (PCL) due to dead-time loss or PP was as high as 0.8. In contrast, the LCP had significant biases (>±2 cm of adipose) when the PCL was higher than 0.15. Biases were present with both PCP and LCP when the expected counts were less than 10-120 per datum, which was attributed to the fact that the maximum likelihood did not approach the asymptote. The noise of the PCP was within 8% of the Cramér-Rao lower bound for most cases when no significant bias was present. The two CT studies essentially agreed with the slab simulation study. The PCP had little or no bias in the estimated basis line integrals, reconstructed basis density maps, and synthesized monoenergetic CT images, whereas the LCP had significant biases in the basis line integrals when X-ray beams passed through the lungs and near the body and neck contours, where the PCLs were above 0.15. As a consequence, basis density maps and monoenergetic CT images obtained by the LCP had biases throughout the imaged space. CONCLUSION: We have developed the PCP algorithm, which uses the cascaded PP-CS model. When the expected counts are more than 10-120 per datum, the PCP algorithm is statistically efficient and successfully compensates for the spectral distortion due to both PP and CS, providing little or no bias in basis line integrals, basis density maps, and monoenergetic CT images regardless of count rate. In contrast, the LCP algorithm, which models an LC due to pileup, produces severe biases when incident count rates are high and the PCL is 0.15 or higher.
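The core estimation step of maximizing a Poisson log-likelihood over candidate basis-material thicknesses can be illustrated with a brute-force search. The sketch below is a toy version rather than the paper's efficient three-step search; the forward_model callable (expected counts per energy bin including the detector distortion) and the thickness grids are assumptions.

```python
import numpy as np

def poisson_nll(counts, expected):
    """Negative Poisson log-likelihood (constant terms dropped)."""
    expected = np.clip(expected, 1e-9, None)
    return np.sum(expected - counts * np.log(expected))

def estimate_basis_thicknesses(measured_counts, forward_model,
                               adipose_grid, iodine_grid):
    """Exhaustive search for the (adipose, iodine) thickness pair that maximizes
    the Poisson log-likelihood of the measured per-bin counts."""
    best = (None, None, np.inf)
    for t_a in adipose_grid:
        for t_i in iodine_grid:
            nll = poisson_nll(measured_counts, forward_model(t_a, t_i))
            if nll < best[2]:
                best = (t_a, t_i, nll)
    return best[:2]
```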


Subjects
Photons; Tomography, X-Ray Computed; Computer Simulation; Phantoms, Imaging; Radiography; Tomography, X-Ray Computed/methods
5.
Med Phys ; 49(8): 5014-5037, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35651302

ABSTRACT

BACKGROUND: Various clinical studies show the potential for a wider quantitative role of diagnostic X-ray computed tomography (CT) beyond size measurements. Currently, however, the clinical use of attenuation values is limited due to their lack of robustness. This issue can be observed even on the same scanner across patient size and positioning. There are different causes for the lack of robustness in the attenuation values; one possible source of error is beam hardening of the X-ray source spectrum. The conventional and well-established approach to address this issue is a calibration-based single-material beam hardening correction (BHC) using a water cylinder. PURPOSE: We investigate an alternative approach for single-material BHC with the aim of producing more robust attenuation values. The underlying hypothesis of this investigation is that calibration-based BHC automatically corrects for scattered radiation in a manner that is suboptimal in terms of bias as soon as the scanned object strongly deviates from the water cylinder used for calibration. METHODS: The proposed approach performs BHC via an analytical energy response model that is embedded into a correction pipeline, which efficiently estimates and subtracts scattered radiation in a patient-specific manner prior to BHC. The estimation of scattered radiation is based on minimizing, on average, the squared difference between our corrected data and the vendor-calibrated data. The energy response model accounts for the spectral effects of the detector response and the prefiltration of the source spectrum, including a beam-shaping bowtie filter. The performance of the correction pipeline is first characterized with computer-simulated data. Afterward, it is tested using real 3-D CT data sets of two different phantoms, with various kV settings and phantom positions, assuming a circular data acquisition. The results are compared in the image domain to those from the scanner. RESULTS: For experiments with a water cylinder, the proposed correction pipeline leads to results similar to the vendor's. For reconstructions of a QRM liver phantom with extension ring, the proposed correction pipeline achieved more uniform and stable attenuation values of homogeneous materials within the phantom. For example, the root mean squared deviation between centered and off-centered phantom positioning was reduced from 6.6 to 1.8 HU in one profile. CONCLUSIONS: We have introduced a patient-specific approach for single-material BHC in diagnostic CT via the use of an analytical energy response model. This approach shows promising improvements in the robustness of attenuation values for large patient sizes. Our results contribute toward improving CT images so as to make CT attenuation values more reliable for use in clinical practice.
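To make the single-material BHC idea concrete, the sketch below builds a water correction lookup table from an effective spectrum: a polychromatic projection value p is mapped back to the ideal monochromatic line integral q of the same water thickness. It is a minimal illustration under assumed inputs (effective spectral weights and a water attenuation curve on a common energy grid); the paper's energy response model additionally includes the detector response and bowtie prefiltration explicitly.

```python
import numpy as np

def water_bhc_lookup(spectrum_weights, mu_water, mu_water_ref,
                     thickness_cm=np.linspace(0.0, 50.0, 501)):
    """Single-material (water) beam-hardening correction lookup table.
    spectrum_weights : effective spectral weights per energy bin (source x detector)
    mu_water         : water attenuation coefficient per energy bin [1/cm]
    mu_water_ref     : reference monochromatic water attenuation [1/cm]
    Returns sampled pairs (polychromatic p, corrected monochromatic q)."""
    w = spectrum_weights / spectrum_weights.sum()
    # polychromatic projection value: p(t) = -ln( sum_E w(E) * exp(-mu_water(E) * t) )
    p = -np.log(np.sum(w * np.exp(-np.outer(thickness_cm, mu_water)), axis=1))
    q = mu_water_ref * thickness_cm      # ideal monochromatic line integral
    return p, q
```

A measured projection value p_meas would then be corrected via np.interp(p_meas, p, q).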


Assuntos
Tomografia Computadorizada por Raios X , Água , Algoritmos , Calibragem , Humanos , Imagens de Fantasmas , Tomografia Computadorizada por Raios X/métodos , Raios X
6.
Biomed Phys Eng Express ; 8(2)2022 02 18.
Article in English | MEDLINE | ID: mdl-34983885

ABSTRACT

The problem of data truncation in computed tomography (CT) is caused by missing data when the patient exceeds the scan field of view (SFOV) of the CT scanner. The reconstruction of a truncated scan produces severe truncation artifacts both inside and outside the SFOV. We have employed a deep learning-based approach to extend the field of view and suppress truncation artifacts. Our aim is thereby to generate a good estimate of the real patient data rather than a perfect, diagnostic image in regions beyond the SFOV of the CT scanner. This estimate could then be used as an input to higher-order reconstruction algorithms [1]. To evaluate the influence of the network structure and layout on the results, three convolutional neural networks (CNNs) were investigated in this paper: a general CNN called ConvNet, an autoencoder, and the U-Net architecture. Additionally, the impact of L1, L2, structural dissimilarity, and perceptual loss functions on the neural network's learning has been assessed. The evaluation on a data set comprising 12 truncated test patients demonstrated that the U-Net in combination with the structural dissimilarity loss showed the best performance in terms of image restoration in regions beyond the SFOV of the CT scanner. Moreover, this network produced the best mean absolute error, L1, L2, and structural dissimilarity evaluation measures on the test set compared to the other applied networks. Therefore, it is possible to achieve truncation artifact removal using deep learning techniques.
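Since the structural dissimilarity loss turned out to be the decisive ingredient above, the sketch below illustrates a simplified, single-window version of it, DSSIM = (1 - SSIM) / 2. A full SSIM implementation uses local windows; the global statistics and the constants used here are illustrative simplifications, not the study's exact loss.

```python
import numpy as np

def structural_dissimilarity(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) structural dissimilarity between images x and y,
    assumed to be normalized to a comparable intensity range."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
    return 0.5 * (1.0 - ssim)
```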


Assuntos
Aprendizado Profundo , Artefatos , Humanos , Processamento de Imagem Assistida por Computador/métodos , Imagens de Fantasmas , Tomografia Computadorizada por Raios X/métodos
7.
Med Phys ; 49(3): 1495-1506, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34822186

ABSTRACT

PURPOSE: A motion compensation method aimed at correcting motion artifacts of cardiac valves is proposed. The primary focus is the aortic valve. METHODS: The method is based on partial angle reconstructions and a cost function that includes the image entropy. A motion model is applied to approximate the cardiac motion in the temporal and spatial domain. Based on characteristic values for velocities and strain during cardiac motion, penalties for the velocity and the spatial derivatives are introduced to maintain anatomically realistic motion vector fields and avoid distortions. The model addresses global elastic deformation, but not the finer and more complicated motion of the valve leaflets. RESULTS: The method is verified on clinical data. Image quality was improved for most artifact-impaired reconstructions. An image quality study with Likert scoring of the motion artifact severity on a scale from 1 (highest image quality) to 5 (lowest image quality/extreme artifact presence) was performed. The biggest improvements after applying motion compensation were achieved for strongly artifact-impaired initial images scoring 4 and 5, resulting in an average change of the scores by -0.59 ± 0.06 and -1.33 ± 0.03, respectively. In the case of artifact-free images, a risk of introducing blurring was observed, and their average score was raised by 0.42 ± 0.03. CONCLUSION: Motion artifacts were consistently removed and image quality improved.
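Image entropy, the central term of the cost function above, rewards reconstructions with a compact gray-value histogram, which motion streaks tend to spread out. The sketch below computes that term; the velocity and strain penalties of the full cost function are only indicated in a comment, and the bin count is an assumption.

```python
import numpy as np

def image_entropy(image, n_bins=256):
    """Gray-level entropy of a reconstruction; motion artifacts (streaks, doubled
    edges) tend to increase it, so the motion model parameters can be chosen to
    minimize it."""
    hist, _ = np.histogram(image, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# schematic total cost of the compensation (penalty terms not shown here):
#   cost = image_entropy(recon) + lambda_v * velocity_penalty + lambda_s * strain_penalty
```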


Assuntos
Valva Aórtica , Processamento de Imagem Assistida por Computador , Algoritmos , Valva Aórtica/diagnóstico por imagem , Artefatos , Processamento de Imagem Assistida por Computador/métodos , Movimento (Física) , Tomografia Computadorizada por Raios X
8.
Med Phys ; 48(9): 4824-4842, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34309837

ABSTRACT

PURPOSE: Dual-source computed tomography (DSCT) uses two source-detector pairs offset by about 90°. In addition to the well-known forward scatter, a special issue in DSCT is cross-scattered radiation from X-ray tube A detected by the detector of system B and vice versa. This effect can lead to artifacts and to a reduction of the contrast-to-noise ratio of the images. The purpose of this work is to present and evaluate different deep learning-based methods for scatter correction in DSCT. METHODS: We present different neural network-based methods for forward- and cross-scatter correction in DSCT. These deep scatter estimation (DSE) methods mainly differ in the input and output information provided for training and inference and in whether they operate on two-dimensional (2D) or three-dimensional (3D) data. The networks are trained and validated with scatter distributions obtained by our in-house Monte Carlo simulation. The simulated geometry is adapted to a realistic clinical setup. RESULTS: All DSE approaches reduce scatter-induced artifacts and lead to better results than the measurement-based scatter correction. Forward scatter, in the presence of cross-scatter, is best estimated either by our network that uses the current projection and a couple of neighboring views (fDSE 2D few views) or by our 3D network that processes all projections simultaneously (fDSE 3D). Cross-scatter, in the presence of forward scatter, is best estimated using xSSE xDSE 2D, with xSSE referring to a quick single-scatter estimate of the cross-scatter, or by xDSE 3D, which uses all projections simultaneously. Using our proposed networks, the total scatter error in DSCT could be reduced from about 18 HU to approximately 3 HU. CONCLUSIONS: Deep learning-based scatter correction can reduce scatter artifacts in DSCT. Providing a quick cross-scatter approximation as additional input leads to more accurate cross-scatter estimates, and the ability to leverage information across different projection angles further improves the precision of the algorithm.


Assuntos
Aprendizado Profundo , Algoritmos , Artefatos , Tomografia Computadorizada de Feixe Cônico , Processamento de Imagem Assistida por Computador , Imagens de Fantasmas , Espalhamento de Radiação , Tomografia Computadorizada por Raios X
9.
Med Phys ; 48(7): 3559-3571, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33959983

ABSTRACT

PURPOSE: During a typical cardiac short scan, the heart can move several millimeters. As a result, the corresponding CT reconstructions may be corrupted by motion artifacts. The assessment of small structures, such as the coronary arteries, is particularly impaired by the presence of these artifacts. In order to estimate and compensate for coronary artery motion, this manuscript proposes the deep partial angle-based motion compensation (Deep PAMoCo). METHODS: The basic principle of the Deep PAMoCo relies on the concept of partial angle reconstructions (PARs), that is, it divides the short-scan data into several consecutive angular segments and reconstructs them separately. Subsequently, the PARs are deformed according to a motion vector field (MVF) such that they represent the same motion state and are summed up to obtain the final motion-compensated reconstruction. However, in contrast to prior work based on the same principle, the Deep PAMoCo estimates and applies the MVF via a deep neural network to increase the computational performance as well as the quality of the motion-compensated reconstructions. RESULTS: Using simulated data, it could be demonstrated that the Deep PAMoCo is able to remove almost all motion artifacts independent of the contrast, the radius, and the motion amplitude of the coronary artery. In all cases, the average error of the CT values along the coronary artery is about 25 HU, while errors of up to 300 HU can be observed if no correction is applied. Similar results were obtained for clinical cardiac CT scans, where the Deep PAMoCo clearly outperforms state-of-the-art coronary artery motion compensation approaches in terms of processing time as well as accuracy. CONCLUSIONS: The Deep PAMoCo provides an efficient approach to increase the diagnostic value of cardiac CT scans, even if they are highly corrupted by motion.
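The final step described above, deforming each partial angle reconstruction to a common motion state and summing, can be written compactly. The 2-D sketch below assumes the MVFs are already given (in the paper they come from the deep network); function and variable names are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def combine_partial_angle_recons(pars, mvfs):
    """Warp each partial angle reconstruction (PAR) to a common motion state and sum.
    pars : list of 2-D arrays (one PAR per angular segment)
    mvfs : list of displacement fields of shape (2, H, W), in pixels (dy, dx)."""
    h, w = pars[0].shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    result = np.zeros((h, w))
    for par, mvf in zip(pars, mvfs):
        coords = np.stack([yy + mvf[0], xx + mvf[1]])   # deformed sampling grid
        result += map_coordinates(par, coords, order=1, mode='nearest')
    return result
```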


Assuntos
Vasos Coronários , Aprendizado Profundo , Algoritmos , Artefatos , Vasos Coronários/diagnóstico por imagem , Processamento de Imagem Assistida por Computador , Movimento (Física) , Imagens de Fantasmas , Tomografia Computadorizada por Raios X
10.
Med Phys ; 48(7): 3479-3499, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33838055

ABSTRACT

PURPOSE: In this work, we explore the potential of region-of-interest (ROI) imaging in x-ray computed tomography (CT). Using two previously developed dynamic beam attenuator (DBA) concepts for fluence field modulation (FFM), we investigate and evaluate the potential dose savings in comparison with current FFM technology. METHODS: ROI imaging is a special application of FFM in which the bulk of the x-ray radiation is directed toward a certain anatomical target (ROI), specified by the imaging task, while the surrounding tissue is spared from radiation. We introduce a criterion suitable to quantitatively describe the balance between image quality inside an ROI and total radiation dose with respect to a given ROI imaging task. It accounts for the mean image variance within the ROI and the effective patient dose calculated from Monte Carlo simulations. The criterion is further used to compile task-specific DBA trajectories determining the primary x-ray fluence, and eventually to compare different FFM techniques, namely the sheet-based dynamic beam attenuator (sbDBA), the z-aligned sbDBA (z-sbDBA), and an adjustable static operation mode of the z-sbDBA. Furthermore, two static bowtie filters and the influence of tube current modulation (TCM) are included in the comparison. RESULTS: Our simulations demonstrate that the presented trajectory optimization method determines reasonable DBA trajectories. The influence of TCM depends strongly on the imaging task. The narrow bowtie filter allows for dose reductions of about 10% compared to the regular bowtie filter in the considered ROI imaging tasks. The DBAs are shown to realize substantially larger dose reductions. In our cardiac imaging scenario, the DBAs can reduce the effective dose by about 30% (z-sbDBA) or 60% (sbDBA). We can further verify that the noise characteristics are not adversely affected by the DBAs. CONCLUSION: Our research demonstrates that ROI imaging using the presented DBA concepts is a promising technique toward more patient- and task-specific CT imaging at lower radiation dose. Both the sbDBA and the z-sbDBA are potential technical solutions for realizing ROI imaging in x-ray CT.


Assuntos
Tecnologia , Tomografia Computadorizada por Raios X , Humanos , Método de Monte Carlo , Imagens de Fantasmas , Doses de Radiação , Raios X
11.
Med Phys ; 47(10): 4827-4837, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32754971

ABSTRACT

PURPOSE: We present a new concept for dynamic fluence field modulation (FFM) in x-ray computed tomography (CT). The so-called z-aligned sheet-based dynamic beam attenuator (z-sbDBA) is developed to dynamically compensate for variations in patient attenuation across the fan beam and over the projection angle. The goal is to enhance image quality and to reduce patient radiation dose. METHODS: The z-sbDBA consists of an array of attenuation sheets aligned along the z direction. In the neutral position, the array is focused toward the focal spot. Tilting the z-sbDBA defocuses the sheets, thus reducing the transmission at larger fan beam angles. The structure of the z-sbDBA differs significantly from the previous sheet-based dynamic beam attenuator (sbDBA) in two features: (a) the sheets of the z-sbDBA are aligned parallel to the detector rows, and (b) the height of the sheets increases from the center toward larger fan beam angles. We built a motor-actuated prototype of the z-sbDBA integrated into a clinical CT scanner. In experiments, we investigated its feasibility for FFM. We compared the z-sbDBA to common CT bowtie filters in terms of the spectral dependency of the transmission and the achievable image variance distribution in reconstructed phantom images. Additionally, the potential radiation dose saving of the z-sbDBA for region-of-interest (ROI) imaging was studied. RESULTS: Our experimental results confirm that the z-sbDBA can realize variable transmission profiles of the radiation fluence with only small tilts. Compared to the sbDBA, the z-sbDBA mitigates some practical and mechanical issues. In comparison to bowtie filters, the spectral dependency is considerably reduced when using the z-sbDBA. Likewise, more homogeneous image variance distributions can be attained in reconstructed phantom images. The z-sbDBA allows controlling the spatial image variance distribution, which makes it suitable for ROI imaging. Our comparison on ROI imaging reveals skin dose reductions of up to 35% at equal ROI image quality when using the z-sbDBA. CONCLUSION: Our new concept for FFM in x-ray CT, the z-sbDBA, was experimentally validated on a clinical CT scanner. It facilitates dynamic FFM by realizing variable transmission profiles across the fan beam angle on a projection-wise basis. This key feature allows for substantial improvements in image quality and a reduction in patient radiation dose, and it additionally provides a technical solution for ROI imaging.


Assuntos
Tomografia Computadorizada por Raios X , Humanos , Imagens de Fantasmas , Doses de Radiação , Raios X
12.
Med Phys ; 46(12): e835-e854, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31811793

ABSTRACT

PURPOSE: Model-based iterative reconstruction is a promising approach to achieve dose reduction without affecting image quality in diagnostic x-ray computed tomography (CT). In the problem formulation, it is common to enforce non-negative values to accommodate the physical non-negativity of x-ray attenuation. Using this a priori information is believed to be beneficial in terms of image quality and convergence speed. However, enforcing non-negativity imposes limitations on the problem formulation and the choice of optimization algorithm. For these reasons, it is critical to understand the value of the non-negativity constraint. In this work, we present an investigation that sheds light on the impact of this constraint. METHODS: We primarily focus our investigation on the examination of properties of the converged solution. To avoid any possibly confounding bias, the reconstructions are all performed using a provably converging algorithm started from a zero volume. To keep the computational cost manageable, an axial CT scanning geometry with narrow collimation is employed. The investigation is divided into five experimental studies that challenge the non-negativity constraint in various ways, including noise, beam hardening, parametric choices, truncation, and photon starvation. These studies are complemented by a sixth one that examines the effect of using ordered subsets to obtain a satisfactory approximate result within 50 iterations. All studies are based on real data, which come from three phantom scans and one clinical patient scan. The reconstructions with and without the non-negativity constraint are compared in terms of image similarity and convergence speed. In select cases, the image similarity evaluation is augmented with quantitative image quality metrics such as the noise power spectrum and closeness to a known ground truth. RESULTS: For cases with moderate inconsistencies in the data, associated with noise and bone-induced beam hardening, our results show that the non-negativity constraint offers little benefit. By varying the regularization parameters in one of the studies, we observed that sufficient edge-preserving regularization tends to dilute the value of the constraint. For cases with strong data inconsistencies, the results are mixed: the constraint can be both beneficial and deleterious; in either case, however, the difference between using the constraint or not is small relative to the overall level of error in the image. The results with ordered subsets are encouraging in that they show similar observations. In terms of convergence speed, we only observed one major effect, in the study with data truncation; this effect favored the use of the constraint, but had no impact on our ability to obtain the converged solution without constraint. CONCLUSIONS: Our results did not highlight the non-negativity constraint as being strongly beneficial for diagnostic CT imaging. Altogether, we thus conclude that in some imaging scenarios, the non-negativity constraint could be disregarded to simplify the optimization problem or to adopt other forward projection models that require complex optimization machinery to be used together with non-negativity.
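The constraint under investigation enters the optimization only through a projection step. The sketch below shows where that projection sits in a generic gradient-type update; it is a schematic illustration under our own assumptions, not the provably converging algorithm used in the study.

```python
import numpy as np

def gradient_step(x, gradient, step, enforce_nonneg):
    """One gradient-type update of a model-based iterative reconstruction,
    with or without the non-negativity constraint applied as a projection."""
    x_new = x - step * gradient
    if enforce_nonneg:
        x_new = np.maximum(x_new, 0.0)   # projection onto the non-negative orthant
    return x_new
```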


Assuntos
Processamento de Imagem Assistida por Computador/métodos , Modelos Teóricos , Tomografia Computadorizada por Raios X , Algoritmos , Artefatos , Quadril/diagnóstico por imagem , Humanos , Metais , Imagens de Fantasmas , Doses de Radiação
13.
Med Phys ; 46(11): 4777-4791, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31444974

ABSTRACT

INTRODUCTION: In cardiac computed tomography (CT), irregular motion may lead to unique artifacts for scanners whose longitudinal collimation does not cover the entire heart. Given partial coverage, subvolumes (stacks) may be reconstructed and used to assemble a final CT volume. Irregular motion, for example due to cardiac arrhythmia or breathing, may cause mismatch between neighboring stacks and therefore discontinuities within the final CT volume. The aim of this work is the removal of these discontinuities, hereafter referred to as stack transition artifacts. METHOD AND MATERIALS: Stack transition artifact removal (STAR) is achieved using a symmetric deformable image registration. A symmetric Demons algorithm was implemented and applied to the stacks to remove the mismatch and therefore the stack transition artifacts. The registration can be controlled with one parameter that affects the smoothness of the deformation vector field (DVF). The latter is crucial for realistically transforming the stacks. Different smoothness settings as well as an entirely automatic parameter selection, which considers the required deformation magnitude for each registration, were tested with patient data. Thirteen datasets were evaluated. Simulations were performed on two additional datasets. RESULTS AND CONCLUSION: STAR considerably improved image quality while computing realistic DVFs. Discontinuities, for example appearing as breaks or cuts in coronary arteries or cardiac valves, were removed or considerably reduced. A constant smoothing parameter that ensured satisfactory results for all datasets was found. The automatic parameter selection was able to find a proper setting for each individual dataset. Consequently, no over-regularization of the DVF occurred that would unnecessarily limit the registration accuracy for cases with small deformations. The automatic parameter selection yielded the best overall results and provided a registration method for cardiac data that does not require user input.
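For orientation, the sketch below shows one classic (non-symmetric, 2-D) Demons iteration with Gaussian regularization of the deformation vector field, which is the mechanism behind the smoothness parameter discussed above. It is a simplified stand-in, not the paper's symmetric 3-D implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def demons_update(fixed, moving_warped, dvf, sigma_smooth):
    """One Demons iteration (Thirion-style force) in 2-D.
    fixed, moving_warped : images of shape (H, W); `moving_warped` is the moving
                           stack resampled with the current DVF
    dvf                  : current displacement field of shape (2, H, W), (dy, dx)
    sigma_smooth         : Gaussian smoothing controlling DVF smoothness."""
    diff = moving_warped - fixed
    gy, gx = np.gradient(fixed)
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0] = 1e-12
    # additive Demons force, one component per axis
    dvf = dvf + np.stack([-diff * gy / denom, -diff * gx / denom])
    # Gaussian regularization keeps the deformation anatomically plausible
    return np.stack([gaussian_filter(c, sigma_smooth) for c in dvf])
```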


Assuntos
Artefatos , Coração/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos , Tomografia Computadorizada por Raios X
14.
Med Phys ; 46(12): 5528-5537, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31348527

ABSTRACT

PURPOSE: It has been a long-standing wish in computed tomography (CT) to compensate the emitted x-ray beam intensity for the patient's changing attenuation during the rotation of a CT data acquisition. The patient attenuation changes both spatially, along the fan beam angle, and temporally, between different projections. By modifying the pre-patient x-ray intensity profile according to the attenuation properties of the given object, image noise can be homogenized and dose can be delivered where it is really needed. Current state-of-the-art bowtie filters are not capable of changing attenuation profiles during the CT data acquisition. In this work, we present the sheet-based dynamic beam attenuator (sbDBA), a novel technical concept enabling dynamic shaping of the transmission profile. METHODS: The sbDBA consists of an array of closely spaced, highly attenuating metal sheets focused toward the focal spot. Intensity modulation can be achieved by controlled defocusing of the array such that the attenuation of the x-ray fan beam depends on the fan angle. The sbDBA concept was evaluated in Monte Carlo (MC) simulations regarding its spectral and scattering properties. A prototype of the sbDBA was installed in a clinical CT scanner, and measurements evaluating the feasibility and the performance of the sbDBA concept were carried out. RESULTS: Experimental measurements on a CT scanner demonstrate the ability of the sbDBA to produce an attenuation profile that can be changed in width and location. Furthermore, the sbDBA shows constant transmission properties at various tube voltages. A small effect of the flying focal spot (FFS) position on the transmission profile can be observed. MC simulations confirm the essential properties of the sbDBA: in contrast to conventional bowtie filters, the sbDBA has almost no impact on the energy spectrum of the beam, and there is negligible scatter emission toward the patient. CONCLUSIONS: A new concept for dynamic beam attenuation has been presented, and its ability to dynamically shape the transmission profile has successfully been demonstrated. Advantages compared to regular bowtie filters, including the absence of filter-induced beam hardening and scatter, have been confirmed. The novel concept of a DBA paves the way toward region-of-interest (ROI) imaging and further reductions in patient dose.


Assuntos
Processamento de Imagem Assistida por Computador/métodos , Tomografia Computadorizada por Raios X , Abdome/diagnóstico por imagem , Humanos , Método de Monte Carlo , Imagens de Fantasmas , Espalhamento de Radiação , Software
15.
Phys Med Biol ; 64(10): 105008, 2019 05 10.
Article in English | MEDLINE | ID: mdl-30965298

ABSTRACT

PURPOSE: To find comprehensive equations for the frequency-dependent MTF and DQE of photon counting detectors, including the effect that the combination of crosstalk with an energy threshold changes the pixel sensitivity profile, and to compare the results with measurements. METHODS: The framework of probability-generating functions (PGF) is used to find a simple method to derive the MTF and the DQE directly from a Monte Carlo model of the detection process. RESULTS: In combination with realistic model parameters for the detector, the method is used to predict the MTF and the DQE for different pixel sizes and thresholds. Particularly for small pixels, the modification of the sensitivity profile due to crosstalk substantially affects the frequency dependence of both quantities. CONCLUSION: The phenomenon of the pixel sensitivity profile, i.e., the fact that the choice of the threshold affects the detector sharpness, may play a substantial role in exploiting the full potential of photon counting detectors. The model compares well with measurements: with only two model parameters, it can predict the MTF(f) and the DQE(f) for a wide range of thresholds.


Assuntos
Modelos Teóricos , Método de Monte Carlo , Fótons , Radiometria/instrumentação , Humanos , Razão Sinal-Ruído
16.
Article in English | MEDLINE | ID: mdl-33304618

ABSTRACT

The aim of this study was to develop and validate a simulation platform that generates photon-counting CT images of voxelized phantoms with detailed modeling of manufacturer-specific components, including the geometry and physics of the x-ray source, source filtration, anti-scatter grids, and photon-counting detectors. The simulator generates projection images accounting for both primary and scattered photons using a computational phantom, scanner configuration, and imaging settings. Beam hardening artifacts are corrected using a spectrum- and threshold-dependent water correction algorithm. Physical and computational versions of a clinical phantom (ACR) were used for validation purposes. The physical phantom was imaged using a research prototype photon-counting CT system (Siemens Healthcare) in standard (macro) mode, at four dose levels and with two energy thresholds. The computational phantom was imaged with the developed simulator using the same parameters and settings as in the actual acquisition. Images from both the real and simulated acquisitions were reconstructed using reconstruction software (FreeCT). Primary image quality metrics such as noise magnitude, noise ratio, noise correlation coefficients, noise power spectrum, CT number, in-plane modulation transfer function, and slice sensitivity profiles were extracted from both real and simulated data and compared. The simulator was further evaluated for imaging contrast materials (bismuth, iodine, and gadolinium) at three concentration levels and six energy thresholds. Qualitatively, the simulated images showed a similar appearance to the real ones. Quantitatively, the average relative errors in the image quality measurements were all less than 4%. The developed simulator will enable systematic optimization and evaluation of the emerging photon-counting computed tomography technology.
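One of the image quality metrics compared above, the noise power spectrum, is straightforward to estimate from an ensemble of noise-only regions of interest. The sketch below shows a standard 2-D estimator; the array shapes and normalization convention are assumptions, not taken from the paper.

```python
import numpy as np

def noise_power_spectrum_2d(noise_rois, pixel_size_mm):
    """Ensemble-averaged 2-D noise power spectrum.
    noise_rois    : array of shape (n_rois, N, N) with noise-only ROIs
    pixel_size_mm : in-plane pixel spacing in mm."""
    _, n, _ = noise_rois.shape
    rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)  # remove DC per ROI
    spectra = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2)))**2
    return spectra.mean(axis=0) * pixel_size_mm**2 / (n * n)
```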

18.
Med Phys ; 45(11): 4822-4843, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30136278

ABSTRACT

PURPOSE: Smaller pixel sizes of x-ray photon counting detectors (PCDs) benefit count rate capabilities but increase cross-talk and "double-counting" between neighboring PCD pixels. When an x-ray photon produces multiple (n) counts at neighboring (sub-)pixels and they are added during the post-acquisition N × N binning process, the variance of the final PCD output-pixel will be larger than its mean. Meanwhile, anti-scatter grids are placed at the pixel boundaries in most x-ray CT systems and will decrease cross-talk between sub-pixels because the grids mask the sub-pixels underneath them, block the primary x-rays, and increase the separation distance between active sub-pixels. The aim of this paper was, first, to study the PCD statistics with various N × N binning schemes and three different masking methods in the presence of cross-talk, and second, to assess one of the most fundamental performance measures of x-ray CT: soft tissue contrast visibility. METHODS: We used a PCD cross-talk model (Photon counting toolkit, PcTK), produced cross-talk data between 3 × 3 neighboring sub-pixels, and calculated the mean, variance, and covariance of output-pixels for each N × N binning scheme [4 × 4 binning, 2 × 2 binning, and 1 × 1 binning (i.e., no binning)] and three different sub-pixel masking methods (no mask, 1-D mask, and 2-D mask). We then set up a simulation to evaluate the soft tissue contrast visibility. X-rays of 120 kVp were attenuated by 10-40 cm of water, with the right side of the PCDs having 0.5 cm thicker water than the left side. A pair of output-pixels across the left-right boundary was used to assess the sensitivity index (SI or d'), which typically ranges from 0 to 1 and is a generalized signal-to-noise ratio, a statistic used in signal detection theory. RESULTS: Binning a larger number of sub-pixels resulted in larger mean counts and a larger variance-to-mean ratio when the lower threshold of the energy window was below half of the incident energy. Mean counts decreased in the order of no mask (the largest), 1-D mask, and 2-D mask, but the difference in variance-to-mean ratio was small. For a given sub-pixel size and masking method, binning more sub-pixels degraded the normalized SI values, but the difference between 4 × 4 binning and 1 × 1 binning was typically less than 0.06. The 1-D mask provided better normalized SI values than no mask and the 2-D mask for the side-by-side case, and the improvements were larger with less binning, although the difference was less than 0.10. The 2-D mask was best for the embedded case. The normalized SI values of combined binning, sub-pixel size, and masking were in the order of 1 × 1 (900 µm)² binning, 2 × 2 (450 µm)² binning, and 4 × 4 (225 µm)² binning for a given masking method, but the differences between them were typically 0.02-0.05. CONCLUSION: We have evaluated the effect of double-counting between PCD sub-pixels with various binning and masking methods. SI values were better with less binning and larger sub-pixels. The difference among the various binning and masking methods, however, was typically less than 0.06, which might result in a dose penalty of 13% if the CT system were linear.
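As a reference for the figure of merit used above, the sketch below evaluates a sensitivity index d' for the side-by-side task from the means and variances of the two output-pixels. It uses an equal-variance Gaussian approximation and is only an illustrative stand-in for the paper's exact estimator.

```python
import numpy as np

def sensitivity_index(mean_left, var_left, mean_right, var_right):
    """Generalized SNR (d') comparing the count distributions of two output-pixels
    that see 0.5 cm different water thickness."""
    return abs(mean_right - mean_left) / np.sqrt(0.5 * (var_left + var_right))
```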


Assuntos
Fótons , Contagem de Cintilação/instrumentação
19.
Med Phys ; 45(5): 1985-1998, 2018 May.
Article in English | MEDLINE | ID: mdl-29537627

ABSTRACT

PURPOSE: The interpixel cross-talk of energy-sensitive photon counting x-ray detectors (PCDs) has been studied, and an analytical model (version 2.1) has been developed for double-counting between neighboring pixels due to charge sharing and K-shell fluorescence x-ray emission followed by its reabsorption (Taguchi K, et al., Medical Physics 2016;43(12):6386-6404). While the model version 2.1 simulated the spectral degradation well, it has the following problems that have recently been found to be significant: (1) the spectrum is inaccurate for smaller pixel sizes; (2) the charge cloud size must be smaller than the pixel size; (3) the model underestimates the spectrum/counts for 10-40 keV; and (4) the model version 2.1 cannot handle n-tuple-counting with n > 2 (i.e., triple-counting or higher). These problems are inherent to the design of the model version 2.1; therefore, we developed a new model and addressed these problems in this study. METHODS: We propose a new PCD cross-talk model (version 3.2; PcTK for "photon counting toolkit") that is based on a completely different design concept from the previous version. It uses a numerical approach and starts with a 2-D model of charge sharing (as opposed to an analytical approach and a 1-D model with version 2.1) and addresses all four problems. The model takes the following factors into account: (1) the shift-variant electron density of the charge cloud (Gaussian-distributed), (2) the detection efficiency, (3) interactions between photons and PCDs via the photoelectric effect, and (4) electronic noise. Correlated noisy PCD data can be generated using either a multivariate normal random number generator or a Poisson random number generator. The effect of the two parameters, the effective charge cloud diameter (d0) and the pixel size (dpix), was studied, and the results were compared with Monte Carlo simulations and the previous model version 2.1. Finally, a script for a CT image quality assessment workflow has been developed, which starts with a few material density images, generates material-specific sinogram (line integral) data and noisy PCD data with spectral distortion using the model version 3.2, and reconstructs PCD-CT images for four energy windows. RESULTS: The model version 3.2 addressed all four problems listed above. The spectra with dpix = 56-113 µm agreed qualitatively with those of a Medipix3 detector with dpix = 55-110 µm without charge summing mode. The counts for 10-40 keV were larger than with the previous model (version 2.1) and agreed very well with MC simulations (root-mean-square difference values with model version 3.2 decreased to 16%-67% of the values with version 2.1). There were many non-zero off-diagonal elements associated with n-tuple-counting with n > 2 in the normalized covariance matrix of 3 × 3 neighboring pixels. Reconstructed images showed biases and artifacts attributed to the spectral distortion due to charge sharing and fluorescence x-rays. CONCLUSION: We have developed a new PCD model for the spatio-energetic cross-talk and correlation between PCD pixels. The workflow demonstrated the utility of the model for general or task-specific image quality assessments for PCD-CT. Note: The program (PcTK) and the workflow scripts have been made available to academic researchers. Interested readers should visit the website (pctk.jhu.edu) or contact the corresponding author.
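One of the two noise generation options mentioned in the METHODS, drawing correlated noisy counts from a multivariate normal with a given mean vector and covariance matrix, can be sketched in a few lines. The rounding and clipping of the draws to non-negative integers is our own assumption for illustration, not necessarily how PcTK handles it.

```python
import numpy as np

def correlated_pcd_counts(mean_counts, covariance, n_samples, rng=None):
    """Draw correlated noisy PCD readings (e.g., for the 3 x 3 pixel neighborhood)
    from a multivariate normal with the given mean vector and covariance matrix."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.multivariate_normal(mean_counts, covariance, size=n_samples)
    return np.clip(np.rint(samples), 0, None)    # non-negative integer counts
```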


Assuntos
Método de Monte Carlo , Fótons , Garantia da Qualidade dos Cuidados de Saúde/métodos , Tomografia Computadorizada por Raios X , Fluxo de Trabalho , Processamento de Imagem Assistida por Computador , Razão Sinal-Ruído
20.
Med Phys ; 45(1): 156-166, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29131361

ABSTRACT

PURPOSE: To find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low-flux limit. METHODS: Formulas for the spatial cross-talk, the noise power spectrum, and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities for event types such as a single count in the central pixel, double counts in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material, and a simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation that randomizes the location of impact and finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. RESULTS: Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. CONCLUSION: The frequency-dependent DQE of a photon-counting detector in the low-flux limit can be described by an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics.
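To illustrate how such event probabilities can be obtained, the sketch below runs a toy 1-D Monte Carlo: photons land uniformly across the central pixel, a Gaussian charge cloud of fixed width is shared with the two lateral neighbors, and each pixel whose collected charge fraction exceeds the threshold registers a count. It is a deliberately reduced stand-in for the MOCASSIM-based simulation; all parameters are illustrative.

```python
import numpy as np
from scipy.special import erf

def event_probabilities(pitch, cloud_sigma, threshold_frac, n_photons=200_000, rng=None):
    """Estimate probabilities of single, double, and neighbor-only count events."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(0.0, pitch, n_photons)                 # impact position in central pixel
    def cdf(t):                                            # Gaussian charge-cloud CDF
        return 0.5 * (1.0 + erf(t / (np.sqrt(2.0) * cloud_sigma)))
    f_left = cdf(0.0 - x)                                  # charge fraction in left neighbor
    f_right = 1.0 - cdf(pitch - x)                         # charge fraction in right neighbor
    f_center = 1.0 - f_left - f_right                      # charge fraction in central pixel
    counts = np.stack([f_left, f_center, f_right]) > threshold_frac
    single = np.mean(counts[1] & ~counts[0] & ~counts[2])
    double = np.mean(counts[1] & (counts[0] | counts[2]))
    neighbor_only = np.mean(~counts[1] & (counts[0] | counts[2]))
    return single, double, neighbor_only
```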


Subjects
Models, Theoretical; Photons; X-Rays